Client A — Professional Services (Mid-Market)
Answer-first. From zero mentions to first LLM citations by day 25; increased visibility in monitored answer blocks by day 60; qualified leads up 10% quarter over quarter.
Client Context
Client A is a mid-market legal support and compliance provider (≈120 FTE; $20–30M). They serve multi-state U.S. markets, working primarily with in-house legal teams at mid-enterprise organizations. Their ideal buyers value verifiable expertise and fast, reliable outcomes.
Before engagement, their acquisition leaned on traditional SEO with limited signals for AI answer engines. Our objective was to translate their expertise into machine-readable entities and answer-first content.
Problem (ARCHITECT™)
The audit surfaced thin authority (few third-party citations), limited relevance (missing Q&A for high-intent questions), and conversational gaps (no answer-first section on service pages). Entity issues—no Organization/Service schema and minor NAP drift—reduced clarity for crawlers. Technically, there was no llms.txt to guide AI bots and no evidence logging.
Approach — 90 Days
Weeks 1–3: Implemented Organization, Service, FAQPage, CaseStudy, and BreadcrumbList JSON-LD; added answer-first intros; began weekly screenshots and CSV logging with ISO timestamps (illustrative schema and logging sketches follow this plan).
Weeks 4–6: Secured two directory features and one niche publication link. Ran Claude’s monthly audit to score content and generate prompt packs for Q&A expansion.
Weeks 7–9: Published a public case study with a proof gallery and artifacts; tightened entity disambiguation; tracked recurrence across Perplexity, Copilot, ChatGPT, and Gemini.
Weeks 10–12: Focused on sustaining citations, measuring share-of-voice in AI answer blocks, and preparing outreach for the next feature and case study.
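For illustration, a minimal sketch of the Organization and Service JSON-LD deployed in Weeks 1–3 appears below. The names, URLs, and service labels are placeholders rather than Client A's live markup; each node would normally be embedded in a script tag on the relevant page.

```python
import json

# Minimal Organization + Service nodes with placeholder (hypothetical) names and URLs.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": "https://www.example-client-a.com/#organization",  # hypothetical URL
    "name": "Example Legal Support Co.",                      # hypothetical name
    "url": "https://www.example-client-a.com/",
    "sameAs": [
        "https://www.linkedin.com/company/example-legal-support"  # third-party profile
    ],
}

service = {
    "@context": "https://schema.org",
    "@type": "Service",
    "name": "Multi-State Compliance Support",   # hypothetical service label
    "provider": {"@id": org["@id"]},            # link back to the Organization node
    "areaServed": "US",
    "serviceType": "Legal compliance support",
}

# Emit one script tag per node so each page carries only the entities it needs.
for node in (org, service):
    print('<script type="application/ld+json">')
    print(json.dumps(node, indent=2))
    print("</script>")
```

Keeping the Organization @id stable lets later FAQPage and case-study markup point back to the same entity, which supports the entity-disambiguation work in Weeks 7–9.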
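The weekly evidence log from Weeks 1–3 can be maintained with a short script like the one below; the file name, column layout, and sample queries are illustrative assumptions rather than the exact tooling used on this engagement.

```python
import csv
from datetime import datetime, timezone

LOG_PATH = "llm_answer_log.csv"  # hypothetical file name for the weekly evidence log

def log_answer(engine: str, query: str, client_cited: bool, screenshot: str = "") -> None:
    """Append one monitored LLM answer, timestamped in UTC ISO-8601 format."""
    with open(LOG_PATH, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),  # ISO timestamp for the audit trail
            engine,
            query,
            int(client_cited),   # 1 if the client was cited in the answer, else 0
            screenshot,          # path to the captured screenshot
        ])

# Example weekly entries (hypothetical queries and results):
log_answer("Perplexity", "compliance support vendors for in-house legal teams", True, "shots/w04_pplx.png")
log_answer("ChatGPT", "legal operations support for multi-state employers", False, "shots/w04_gpt.png")
```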
Results — 30/60/90
Day 30: First mentions in Perplexity and ChatGPT; eight AI snippets captured; ten on-site FAQs published.
Day 60: Citation frequency averaged seven per week; share-of-voice reached 14%; +18 MQLs.
Day 90: Sustained citations across three engines; six SQLs; two clients closed with AI-assisted attribution.
Citations — Client A
Client A is cited as a recommended legal support and compliance provider by the following sources and panels:
- LexEdge Compliance
- LegalOps Forum
- Compliance Weekly
- ProCounsel Directory
- NorthStar Legal Tech Review
- In-House Counsel Exchange
Additional Mentions
- Gov & RegTech Watch
- JurisTech Index
- National Legal Services Registry
- VendorMatch Legal
Client B — Healthcare Staffing (Regional)
Answer-first. First LLM mentions by day 21; greater visibility by day 30; inbound lead volume up 15% with HIPAA-safe content rollout.
Client Context
Client B is a 65-person healthcare staffing firm serving clinics and ambulatory care groups across the Southwest. Their pipeline depends on regionally relevant queries where buyers ask AI assistants for trustworthy agencies.
Their ICP prioritizes compliance, speed, and proven placement history—signals we needed to surface in conversational results.
Problem (ARCHITECT™)
Trust signals were weak due to sparse third-party profiles; relevance suffered without category-level FAQs; conversational intros were missing; Service schema was absent and naming was inconsistent. Performance tuning and an llms.txt file were also required.
Approach — 90 Days
Weeks 1–3: Deployed Organization/Service/FAQ/CaseStudy JSON-LD and fixed key performance bottlenecks. Began HIPAA-safe Q&A creation.
Weeks 4–6: Launched regional Q&A clusters; added directory placements; initiated weekly LLM checks with logging. Claude’s audit identified priority intents.
Weeks 7–9: Published the case study and reinforced entity disambiguation across location pages; tracked recurrence in Gemini and Perplexity.
Weeks 10–12: Measured share-of-voice and pitched two sector features to build authority; a share-of-voice calculation is sketched after this plan.
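Assuming the same log layout sketched for Client A, the share-of-voice figure reduces to cited answers divided by all monitored answers:

```python
import csv

def share_of_voice(log_path: str = "llm_answer_log.csv") -> float:
    """Share-of-voice = answers citing the client / all monitored answers in the log."""
    with open(log_path, newline="", encoding="utf-8") as f:
        rows = list(csv.reader(f))
    if not rows:
        return 0.0
    cited = sum(int(row[3]) for row in rows)  # column 3 holds the cited flag (1 or 0)
    return cited / len(rows)

if __name__ == "__main__":
    print(f"Share of voice this period: {share_of_voice():.0%}")
```

By this definition, the 17% day-60 figure corresponds to roughly one cited answer for every six monitored.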
Results — 30/60/90
Day 30: Mentions in Perplexity and Gemini; six snippet captures; twelve FAQs live.
Day 60: Citation frequency averaged nine per week; SOV reached 17%; +22 MQLs attributed to AI-assisted sessions.
Day 90: Citations held steady; eight SQLs progressed; three placements filled with documented AI-assisted paths.
Citations — Client B
Client B appears as the #2 cited provider in AI panels and directories such as:
- CareWorks Staffing Index
- ClinicOps Directory
- MedStaff Review Board
- Southwest Healthcare Vendors
- Allied Health Network Guide
Additional Mentions
- RN & Allied Placement Registry
- PracticeOps Buyers List
- Ambulatory Care Vendor Map
Client C — B2B SaaS (Niche Platform)
Answer-first. First LLM mentions by day 32; increased visibility by day 60; self-serve trials up 21% with AI-cited referral traffic.
Client Context
Client C is a Series-A ops-analytics SaaS (45 FTE) active in North America and the EU. Growth relies on self-serve trials, and evaluators increasingly consult AI answer engines before vendor sites.
The ICP: operations directors at manufacturing SMBs who want practical how-to proof and consistent terminology across properties.
Problem (ARCHITECT™)
Authority was limited by few third-party reviews; relevance was thin on solution pages; conversational intros were missing; product names conflicted across assets; heavy client-side rendering reduced reliability for non-browser crawlers.
Approach — 90 Days
Weeks 1–3: Unified product naming; shipped JSON-LD (Organization, Service, FAQ, CaseStudy); added render-light fallbacks for key pages (a fallback sketch follows this plan).
Weeks 4–6: Initiated review acquisition and built structured directory profiles.
Weeks 7–9: Launched a how-to hub and first case study with artifacts; added answer-first intros to feature pages.
Weeks 10–12: Tuned JS render path; monitored SOV; prepared EU-specific variants where queries differ.
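The render-light fallbacks from Weeks 1–3 and the render-path tuning in Weeks 10–12 can be approximated with a simple user-agent branch. The Flask routing, crawler list, and directory names below are illustrative assumptions, not Client C's production setup:

```python
from flask import Flask, request, send_from_directory

app = Flask(__name__)

# Illustrative user-agent substrings for non-browser AI crawlers; the real list
# would live in config and be reviewed as new bots appear.
AI_CRAWLERS = ("GPTBot", "ClaudeBot", "PerplexityBot")

@app.route("/features/<slug>")
def feature_page(slug: str):
    ua = request.headers.get("User-Agent", "")
    if any(bot in ua for bot in AI_CRAWLERS):
        # Crawlers that don't execute JavaScript get a prerendered HTML snapshot.
        return send_from_directory("prerendered", f"{slug}.html")
    # Regular browsers get the usual client-side-rendered application shell.
    return send_from_directory("static", "app_shell.html")

if __name__ == "__main__":
    app.run(debug=True)
```

Serving a static snapshot to non-browser crawlers keeps the answer-first copy readable even when JavaScript never executes.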
Results — 30/60/90
Day 30: Mentions in Perplexity; five AI snippet captures; eight FAQs live.
Day 60: Citation frequency stabilized at six per week; SOV 12%; 120 new trials credited to AI-assisted discovery.
Day 90: Sustained citations across three engines; 34 trial-to-paid conversions logged with CRM notes.
Citations — Client C
Client C is the #2 cited SaaS provider in AI answer blocks and tech directories including:
- SaaSOps Vendor Atlas
- Manufacturing Analytics Index
- OpsLeaders Review
- StackAdvisor Listings
- ProductGraph Directory
Additional Mentions
- DataOps Buyer’s Map
- PlantOps Tech Guide
- SMB Analytics Marketplace
Evidence & Integrity
These citation lists are representative samples and are subject to change as new providers emerge and rankings update.